
    My Guess is Better Than Yours


    Modeling the Syntax of the Song of the Great Reed Warbler (Faculty of Engineering, LTH)

    The song of many songbirds can be thought of as consisting of variable sequences drawn from a finite set of syllables. A common approach to understanding the structure of these songs is to model the syllable sequences with a Markov model. The Markov model can allow one-to-one (Markov chain), many-to-many (hidden Markov model), or many-to-one (partially observed Markov model) state-to-syllable mappings. In this project the song of the Great Reed Warbler is studied in terms of the syllable sequences (strophes) it generates. It is shown that the Markov chain captures much of the structure of the song, in the sense that it largely reproduces the syllable distributions observed in the data at each position in the song. The repetition distribution of some syllable classes was consistent with that of a Markov chain, while other syllable classes were better modeled by letting the self-transition probability adapt as the syllable class is repeated. Still other syllable classes had repetition distributions that neither of these two alternatives captured accurately.
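
    A first-order Markov chain of the kind compared above can be estimated by counting syllable-to-syllable transitions. The sketch below is a minimal illustration with made-up strophes and syllable labels, not the project's code; under such a chain, the number of repetitions of a syllable class follows a geometric distribution governed by its self-transition probability.

        from collections import defaultdict

        import numpy as np

        def fit_markov_chain(strophes):
            """Estimate a first-order Markov chain from syllable sequences.

            strophes: list of lists of syllable labels, one list per strophe.
            Returns (labels, P) where P[i, j] estimates the probability that
            syllable labels[j] follows syllable labels[i].
            """
            counts = defaultdict(lambda: defaultdict(int))
            for strophe in strophes:
                for a, b in zip(strophe, strophe[1:]):
                    counts[a][b] += 1
            labels = sorted({s for strophe in strophes for s in strophe})
            index = {s: i for i, s in enumerate(labels)}
            P = np.zeros((len(labels), len(labels)))
            for a, row in counts.items():
                for b, n in row.items():
                    P[index[a], index[b]] = n
            # Row-normalize; syllables with no observed successor keep a zero row.
            P /= np.maximum(P.sum(axis=1, keepdims=True), 1)
            return labels, P

        # Hypothetical strophes for illustration only.
        labels, P = fit_markov_chain([list("aabbbc"), list("aabbcc"), list("abbbbc")])
        print(labels)
        print(P.round(2))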

    Variable Splitting Methods for Constrained State Estimation in Partially Observed Markov Processes

    In this paper, we propose a class of efficient, accurate, and general methods for solving state-estimation problems with equality and inequality constraints. The methods are based on recent developments in variable splitting and partially observed Markov processes. We first present the generalized framework based on variable splitting, then develop efficient methods to solve the state-estimation subproblems arising in the framework. The solutions to these subproblems can be made efficient by leveraging the Markovian structure of the model, as is classically done in so-called Bayesian filtering and smoothing methods. The numerical experiments demonstrate that our methods outperform conventional optimization methods in both computational cost and estimation performance. Comment: 3 figures.
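
    As a concrete illustration of the variable-splitting idea, the sketch below runs a plain ADMM loop for a nonnegativity-constrained linear-Gaussian smoothing problem: the x-update is the unconstrained quadratic smoothing subproblem, the z-update projects onto the constraint set, and a dual update follows. For brevity it solves the quadratic with a dense linear solve; the point made in the paper is that this subproblem is Markovian and can instead be handled by Bayesian filtering and smoothing. The model, constraint, and parameter values below are assumptions made for the example.

        import numpy as np

        def admm_constrained_smoother(y, A, H, q, r, rho=1.0, iters=50):
            """Variable-splitting (ADMM) sketch for constrained state estimation.

            Dynamics:    x_t = A x_{t-1} + w_t,  w_t ~ N(0, q I)
            Observation: y_t = H x_t     + v_t,  v_t ~ N(0, r I)
            Constraint:  x_t >= 0 elementwise.
            """
            T, d = len(y), A.shape[0]
            n = T * d
            # Assemble the quadratic objective 0.5 x'Px - b'x over the stacked
            # states (dense here for brevity; a Kalman/RTS smoother could exploit
            # the block-tridiagonal structure instead).
            P = np.zeros((n, n))
            b = np.zeros(n)
            for t in range(T):
                i = slice(t * d, (t + 1) * d)
                P[i, i] += H.T @ H / r
                b[i] += H.T @ y[t] / r
                if t > 0:
                    j = slice((t - 1) * d, t * d)
                    P[i, i] += np.eye(d) / q
                    P[j, j] += A.T @ A / q
                    P[i, j] += -A / q
                    P[j, i] += -A.T / q
            x = np.zeros(n)
            z = np.zeros(n)
            u = np.zeros(n)
            for _ in range(iters):
                # x-update: unconstrained smoothing subproblem.
                x = np.linalg.solve(P + rho * np.eye(n), b + rho * (z - u))
                # z-update: projection onto the constraint set.
                z = np.maximum(x + u, 0.0)
                # Dual update.
                u = u + x - z
            return z.reshape(T, d)

        # Hypothetical 1-D random-walk example with noisy observations.
        rng = np.random.default_rng(0)
        A, H = np.eye(1), np.eye(1)
        x_true = np.maximum(np.cumsum(rng.normal(0.0, 0.3, 50)), 0.0)[:, None]
        y = x_true + rng.normal(0.0, 0.5, (50, 1))
        x_hat = admm_constrained_smoother(y, A, H, q=0.09, r=0.25)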

    Probabilistic Exponential Integrators

    Probabilistic solvers provide a flexible and efficient framework for simulation, uncertainty quantification, and inference in dynamical systems. However, like standard solvers, they suffer performance penalties for certain stiff systems, where small steps are required not for reasons of numerical accuracy but for the sake of stability. This issue is greatly alleviated in semi-linear problems by the probabilistic exponential integrators developed in this paper. By including the fast, linear dynamics in the prior, we arrive at a class of probabilistic integrators with favorable properties. Namely, they are proven to be L-stable, and in a certain case reduce to a classic exponential integrator, with the added benefit of providing a probabilistic account of the numerical error. The method is also generalized to arbitrary non-linear systems by imposing piecewise semi-linearity on the prior via Jacobians of the vector field at the previous estimates, resulting in probabilistic exponential Rosenbrock methods. We evaluate the proposed methods on multiple stiff differential equations and demonstrate their improved stability and efficiency over established probabilistic solvers. The present contribution thus expands the range of problems that can be effectively tackled within probabilistic numerics.
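
    The classic scheme that these methods reduce to in the semi-linear case can be sketched as follows: for x' = L x + N(x), an exponential (Euler) integrator propagates the stiff linear part exactly through the matrix exponential and only approximates the nonlinearity. The code below illustrates that classical scheme, not the probabilistic solver itself, and the stiff test problem is made up.

        import numpy as np
        from scipy.linalg import expm

        def exponential_euler(L, N, x0, h, steps):
            """Exponential Euler for x' = L x + N(x):
            x_{k+1} = e^{hL} x_k + h * phi_1(hL) N(x_k).
            """
            d = len(x0)
            # Van Loan's augmented-matrix trick: expm([[hL, hI], [0, 0]])
            # contains e^{hL} and h * phi_1(hL) in its top blocks.
            M = np.zeros((2 * d, 2 * d))
            M[:d, :d] = h * L
            M[:d, d:] = h * np.eye(d)
            E = expm(M)
            ehL, h_phi1 = E[:d, :d], E[:d, d:]
            xs = [np.asarray(x0, dtype=float)]
            for _ in range(steps):
                x = xs[-1]
                xs.append(ehL @ x + h_phi1 @ N(x))
            return np.array(xs)

        # Hypothetical stiff semi-linear problem: a step size of 0.1 would be
        # unstable for explicit Euler (eigenvalue -100), but the exponential
        # integrator handles the linear part exactly.
        L = np.array([[-100.0, 0.0], [0.0, -1.0]])
        N = lambda x: np.array([np.sin(x[1]), x[0] ** 2])
        traj = exponential_euler(L, N, x0=[1.0, 1.0], h=0.1, steps=100)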

    Calibrated Adaptive Probabilistic ODE Solvers

    Probabilistic solvers for ordinary differential equations assign a posterior measure to the solution of an initial value problem. The joint covariance of this distribution provides an estimate of the (global) approximation error. The contraction rate of this error estimate as a function of the solver's step size identifies it as a well-calibrated worst-case error, but its explicit numerical value for a given step size is not automatically a good estimate of the actual error. Addressing this issue, we introduce, discuss, and assess several probabilistically motivated ways to calibrate the uncertainty estimate. Numerical experiments demonstrate that these calibration methods interact efficiently with adaptive step-size selection, resulting in descriptive and efficiently computable posteriors. We demonstrate the efficiency of the methodology by benchmarking against the classic, widely used Dormand-Prince 4/5 Runge-Kutta method. Comment: 17 pages, 10 figures.
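
    One calibration strategy commonly used in this line of work is a quasi-maximum-likelihood rescaling of the solver's output covariance: with one-step residuals z_n and their predictive covariances S_n computed under a unit diffusion, the global scale is estimated as sigma^2 = (1 / (N d)) * sum_n z_n' S_n^{-1} z_n. The snippet below is a minimal sketch of that formula on synthetic inputs, not the paper's implementation, and says nothing about the step-size adaptation it is combined with.

        import numpy as np

        def quasi_mle_diffusion(residuals, covariances):
            """Global diffusion scale sigma^2 = (1/(N*d)) * sum_n z_n' S_n^{-1} z_n."""
            N = len(residuals)
            d = len(residuals[0])
            total = sum(float(z @ np.linalg.solve(S, z))
                        for z, S in zip(residuals, covariances))
            return total / (N * d)

        # Synthetic check: if z_n ~ N(0, S_n), the estimate should be close to 1.
        rng = np.random.default_rng(1)
        S = [np.eye(2) * 4.0 for _ in range(500)]
        z = [rng.multivariate_normal(np.zeros(2), Si) for Si in S]
        print(quasi_mle_diffusion(z, S))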